Real-time Terrain Rendering using Smooth Hardware Optimized Level of Detail
We present a method for real-time level-of-detail reduction that is able to display high-complexity polygonal
surface data. A compact and efficient regular grid representation is used. The method is optimized for
modern, low-end consumer 3D graphics cards. When reducing the geometry, we avoid sudden changes in
the geometry, also known as 'popping', by exploiting low-level hardware programmability, while
maintaining interactive frame rates. Terrain models are repolygonized in order to minimize the visible error.
Furthermore, the method minimizes CPU usage during rendering and requires minimal pre-processing. We
believe that this is the first time that a smooth level of detail has been implemented in commodity hardware.
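The smooth transition described above is commonly realized as geomorphing: each vertex is blended between its position in a fine and a coarse LOD level as the viewer moves. The sketch below is our illustration of that idea, not the paper's actual vertex program; the function name and the per-vertex height lists are assumptions.

```python
def geomorph_heights(fine, coarse, t):
    """Blend per-vertex heights between a fine and a coarse LOD level;
    t = 0 gives the fine mesh, t = 1 the coarse one. In hardware this
    blend would run per vertex in a low-level vertex program, so the
    CPU only updates the scalar t."""
    return [(1.0 - t) * f + t * c for f, c in zip(fine, coarse)]

# As a terrain block recedes, t ramps from 0 to 1, so vertices glide
# toward their coarse positions instead of snapping ('popping').
fine = [10.0, 12.0, 11.0, 9.0]
coarse = [10.0, 10.5, 10.5, 9.0]
print(geomorph_heights(fine, coarse, 0.5))  # halfway between the levels
```

Because the blend is per vertex and driven by one parameter, the geometry changes continuously rather than discretely when the LOD level switches.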
Optimizing Photon Mapping Using Multiple Photon Maps for Irradiance Estimates
The photon mapping method is used extensively in global illumination to render photorealistic pictures. We describe a simple optimization technique for calculating the indirect illumination by modifying the photon mapping method. Using our method, the photon map is divided into several photon maps based on the topology of the polygons in the scene. This modification of the photon mapping method has several advantages compared to the traditional method. We demonstrate that the indirect illumination can be calculated faster using our method.
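The idea of partitioning one global photon map into several smaller ones can be sketched as follows. This is a minimal illustration, not the paper's implementation: a real photon map uses a kd-tree rather than a linear scan, and the `surface` key standing in for "polygon topology" is our assumption.

```python
import math

def split_by_surface(photons):
    """Partition one global photon list into per-surface photon maps,
    keyed by a hypothetical 'surface' field on each photon (the paper
    splits maps based on the topology of the polygons in the scene)."""
    maps = {}
    for p in photons:
        maps.setdefault(p["surface"], []).append(p)
    return maps

def estimate_irradiance(photons, x, k=3):
    """Naive k-nearest-photon irradiance estimate at point x: sum the
    powers of the k nearest photons and divide by the area pi * r^2 of
    the disc bounded by the k-th photon."""
    near = sorted(photons, key=lambda p: math.dist(p["pos"], x))[:k]
    r = math.dist(near[-1]["pos"], x)
    return sum(p["power"] for p in near) / (math.pi * r * r)

photons = [
    {"pos": (0.0, 0.0, 0.0), "power": 1.0, "surface": "floor"},
    {"pos": (1.0, 0.0, 0.0), "power": 1.0, "surface": "floor"},
    {"pos": (2.0, 0.0, 0.0), "power": 1.0, "surface": "floor"},
    {"pos": (0.0, 5.0, 0.0), "power": 1.0, "surface": "wall"},
]
maps = split_by_surface(photons)
# Estimating irradiance on the floor only searches the floor's (smaller)
# map, which is where the claimed speed-up comes from.
print(estimate_irradiance(maps["floor"], (0.0, 0.0, 0.0), k=2))
```

Searching a smaller per-surface map also avoids gathering photons that landed on unrelated geometry, which can reduce light leaking across surfaces in addition to the speed-up.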
Efficient light scattering through thin semi-transparent objects
This paper concerns real-time rendering of thin semi-transparent objects. An object in this category could be a piece of cloth, e.g. a curtain. Semi-transparent objects are visualized most correctly using volume rendering techniques. In general such techniques are, however, intractable for real-time applications. Surface rendering is more efficient, but also inadequate, since semi-transparent objects should have a different appearance depending on whether they are front-lit or back-lit. The back-lit side of a curtain, for example, often seems quite transparent, while the front-lit side seems brighter and almost opaque. To capture such visual effects in the standard rendering pipeline, Blinn [1982] proposed an efficient local illumination model based on radiative transfer theory. He assumed media of low density; hence, his equations can render media such as clouds, smoke, and dusty surfaces. Our observation is that Chandrasekhar [1960] has derived the same equations from a different set of assumptions. This alternative derivation makes the theory useful for realistic real-time rendering of dense, but thin, semi-transparent objects such as cloth. We demonstrate that application of the theory in this new area gives far better results than what is obtainable with a traditional real-time rendering scheme using a constant factor for alpha blending.
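The front-lit versus back-lit distinction can be illustrated with a toy single-scattering model for a thin slab. This sketch is not the Blinn/Chandrasekhar equations the paper uses; the function, its parameters (`albedo`, optical depth `tau`), and the specific attenuation terms are our assumptions, chosen only to show why a view- and light-dependent response beats a constant alpha factor.

```python
import math

def thin_slab_shading(n_dot_l, n_dot_v, albedo=0.8, tau=0.5):
    """Toy model of a thin semi-transparent slab with surface normal n,
    light direction l, and view direction v (all unit vectors, so the
    dot products lie in [-1, 1]).

    Back-lit (light and viewer on opposite sides of the slab): radiance
    is transmitted, attenuated by the optical depth along both the
    light path and the view path, so the slab looks quite transparent.
    Front-lit (same side): radiance is scattered back toward the
    viewer, so the slab looks brighter and more opaque."""
    back_lit = (n_dot_l * n_dot_v) < 0.0
    mu_l, mu_v = abs(n_dot_l), abs(n_dot_v)
    if back_lit:
        # Transmission through the slab, attenuated along both paths.
        return albedo * math.exp(-tau / mu_l) * math.exp(-tau / mu_v)
    # Single scattering back out of the front of the slab.
    return albedo * (1.0 - math.exp(-tau / mu_l))

# Same geometry, light moved from behind the slab to in front of it:
print(thin_slab_shading(0.8, -0.8))  # back-lit: mostly transmitted
print(thin_slab_shading(0.8, 0.8))   # front-lit: mostly reflected
```

A constant alpha-blending factor would return the same value in both cases; here the two configurations shade differently, which is the qualitative behavior the paper reproduces in real time with the physically derived equations.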
- …